
    Universal Forgery and Multiple Forgeries of MergeMAC and Generalized Constructions

    This article presents universal forgery and multiple forgeries against MergeMAC, a MAC recently proposed to fit scenarios where bandwidth is limited and strict time constraints apply. MergeMAC divides an input message into two parts, $m\|\tilde{m}$, and its tag is computed by $\mathcal{F}(\mathcal{P}_1(m) \oplus \mathcal{P}_2(\tilde{m}))$, where $\mathcal{P}_1$ and $\mathcal{P}_2$ are PRFs and $\mathcal{F}$ is a public function. The tag size is 64 bits. The designers claim 64-bit security and imply a risk of accepting beyond-birthday-bound queries. This paper first shows that limiting the number of queries to the birthday bound is inevitable, because a generic universal forgery against CBC-like MACs can be adapted to MergeMAC. Afterwards, another attack is presented that works with very few queries, namely 3 queries and $2^{58.6}$ computations of $\mathcal{F}$, by applying a preimage attack against a weak $\mathcal{F}$, which breaks the claimed security. The analysis is then generalized to a MergeMAC variant where $\mathcal{F}$ is replaced with a one-way function $\mathcal{H}$. Finally, multiple forgeries are discussed, in which the attacker's goal is to improve the ratio of the number of queries to the number of forged tags. It is shown that the attacker obtains tags of $q^2$ messages by making only $2q-1$ queries in the sense of existential forgery, and this is tight when the $q^2$ messages have a particular structure. For universal forgery, tags for $3q$ arbitrarily chosen messages can be obtained by making $5q$ queries.
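
    As a minimal structural sketch of the tag computation described above, the following Python uses HMAC-SHA256 and truncated SHA-256 as stand-ins for the scheme's PRFs $\mathcal{P}_1, \mathcal{P}_2$ and public function $\mathcal{F}$; these primitives and the helper names are illustrative assumptions chosen only to show the data flow, not the actual MergeMAC components.

```python
# Structural sketch: tag = F(P1(m) xor P2(m_tilde)), with stand-in primitives.
import hmac, hashlib

TAG_BYTES = 8  # 64-bit tag, matching the size stated in the abstract

def prf(key: bytes, part: bytes) -> bytes:
    # stand-in for the scheme's PRFs P1 / P2, truncated to the merge width
    return hmac.new(key, part, hashlib.sha256).digest()[:TAG_BYTES]

def public_f(merged: bytes) -> bytes:
    # stand-in for the public, unkeyed finalization function F
    return hashlib.sha256(merged).digest()[:TAG_BYTES]

def merge_mac(k1: bytes, k2: bytes, m: bytes, m_tilde: bytes) -> bytes:
    rho1 = prf(k1, m)
    rho2 = prf(k2, m_tilde)
    merged = bytes(a ^ b for a, b in zip(rho1, rho2))
    return public_f(merged)

# Because F is public, colliding the XOR of the two PRF outputs, or finding a
# preimage of a weak F, directly yields forgeries -- the structural avenue the
# attacks described above exploit.
```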

    Data-Independent Memory Hard Functions: New Attacks and Stronger Constructions

    Memory-hard functions (MHFs) are a key cryptographic primitive underlying the design of moderately expensive password hashing algorithms and egalitarian proofs of work. Over the past few years several increasingly stringent goals for an MHF have been proposed, including the requirement that the MHF have high sequential space-time (ST) complexity, parallel space-time complexity, amortized area-time (aAT) complexity and sustained space complexity. Data-Independent Memory Hard Functions (iMHFs) are of special interest in the context of password hashing as they naturally resist side-channel attacks. iMHFs can be specified using a directed acyclic graph (DAG) $G$ with $N=2^n$ nodes and low indegree, and the complexity of the iMHF can be analyzed using a pebbling game. Recently, Alwen et al. [CCS 17] constructed a DAG called DRSample that has aAT complexity at least $\Omega(N^2/\log N)$. Asymptotically DRSample outperformed all prior iMHF constructions, including Argon2i, winner of the password hashing competition (aAT cost $\mathcal{O}(N^{1.767})$), though the constants in these bounds are poorly understood. We show that the greedy pebbling strategy of Boneh et al. [ASIACRYPT 16] is particularly effective against DRSample, e.g., the aAT cost is $\mathcal{O}(N^2/\log N)$. In fact, our empirical analysis reverses the prior conclusion of Alwen et al. that DRSample provides stronger resistance to known pebbling attacks for practical values of $N \leq 2^{24}$. We construct a new iMHF candidate (DRSample+BRG) by using the bit-reversal graph to extend DRSample. We then prove that the construction is asymptotically optimal under every MHF criterion, and we empirically demonstrate that our iMHF provides the best resistance to known pebbling attacks. For example, we show that any parallel pebbling attack either has aAT cost $\omega(N^2)$ or requires at least $\Omega(N)$ steps with $\Omega(N/\log N)$ pebbles on the DAG. This makes our construction the first practical iMHF with a strong sustained space-complexity guarantee and immediately implies that any parallel pebbling has aAT complexity $\Omega(N^2/\log N)$. We also prove that any sequential pebbling (including the greedy pebbling attack) has aAT cost $\Omega(N^2)$ and, if a plausible conjecture holds, any parallel pebbling has aAT cost $\Omega(N^2 \log \log N/\log N)$, the best possible bound for an iMHF.
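
    The bit-reversal graph used above to extend DRSample can be sketched as follows; the two-row layout, node indexing, and helper names are simplifying assumptions intended only to show the edge structure, not the exact DRSample+BRG construction.

```python
# Sketch of a bit-reversal DAG: two rows of N = 2^n nodes, each row a path,
# with source node i feeding sink node N + bitreverse(i).

def bit_reverse(i: int, n: int) -> int:
    """Reverse the n-bit binary representation of i."""
    return int(format(i, f'0{n}b')[::-1], 2)

def brg_edges(n: int):
    N = 1 << n
    edges = []
    for i in range(1, N):
        edges.append((i - 1, i))          # path through the source row
        edges.append((N + i - 1, N + i))  # path through the sink row
    for i in range(N):
        edges.append((i, N + bit_reverse(i, n)))  # bit-reversal edges
    return edges

# A cheap pebbling of the sink row must keep many pebbles on the source row for
# a long stretch, which is the kind of sustained-space behaviour the
# construction above targets.
```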

    Can Caesar Beat Galois?

    The Competition for Authenticated Encryption: Security, Applicability and Robustness (CAESAR) has as its official goal to “identify a portfolio of authenticated ciphers that offer advantages over [the Galois-Counter Mode with AES] and are suitable for widespread adoption.” Each of the 15 candidate schemes competing in the currently ongoing 3rd round of CAESAR must clearly declare its security claims, i.e., whether it can tolerate nonce misuse, and what is the maximal data complexity for which security is guaranteed. These claims appear to be valid for all 15 candidates. Interpreting “Robustness” in CAESAR as the ability to mitigate damage when security guarantees are void, we describe attacks with 64-bit complexity or above, and/or with nonce reuse, for each of the 15 candidates. We then classify the candidates depending on how powerful an attacker needs to be to mount (semi-)universal forgeries, decryption attacks, or key recoveries. Rather than invalidating the security claims of any of the candidates, our results provide an additional criterion for evaluating the security that candidates deliver, which can be useful, e.g., for breaking ties in the final CAESAR discussions.

    Proof of Space from Stacked Expanders

    Recently, proof of space (PoS) has been suggested as a more egalitarian alternative to the traditional hash-based proof of work. In PoS, a prover proves to a verifier that it has dedicated some specified amount of space. A closely related notion is memory-hard functions (MHFs), functions that require a lot of memory/space to compute. While they represent promising progress, existing PoS and MHF constructions have several problems. First, there are large gaps between the desired space-hardness and what can be proven. Second, it has been pointed out that PoS and MHFs should require a lot of space not just at some point, but throughout the entire computation/protocol; few proposals consider this issue. Third, the two existing PoS constructions are both based on a class of graphs called superconcentrators, which are either hard to construct or add a logarithmic-factor overhead to efficiency. In this paper, we construct PoS from stacked expander graphs. Our constructions are simpler, more efficient and have tighter provable space-hardness than prior works. Our results also apply to a recent MHF called Balloon hash. We show Balloon hash has tighter space-hardness than previously believed and consistent space-hardness throughout its computation.
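
    The following Python is a structural sketch of labeling a stacked graph of the kind used above: L layers of N nodes, where every node in a layer is hashed together with several nodes of the previous layer. A real construction uses explicit expander edges; the deterministic pseudorandom parent choice, the parameters, and the function names below are illustrative assumptions to show only the layered data flow.

```python
import hashlib

def label_stacked_graph(seed: bytes, N: int = 8, L: int = 3, d: int = 3):
    def H(*parts: bytes) -> bytes:
        h = hashlib.sha256()
        for p in parts:
            h.update(p)
        return h.digest()

    # layer 0: labels derived directly from the seed
    prev = [H(seed, b'layer0', i.to_bytes(4, 'big')) for i in range(N)]
    for layer in range(1, L):
        cur = []
        for i in range(N):
            # pick d parent indices in the previous layer (stand-in for expander edges)
            parents = [
                int.from_bytes(H(seed, b'edge', layer.to_bytes(2, 'big'),
                                 i.to_bytes(4, 'big'), j.to_bytes(2, 'big')), 'big') % N
                for j in range(d)
            ]
            cur.append(H(layer.to_bytes(2, 'big'), i.to_bytes(4, 'big'),
                         *(prev[p] for p in parents)))
        prev = cur
    return prev  # labels of the top layer; a PoS commits to (a subset of) these

# Storing only the seed forces whole layers to be recomputed on demand; the
# space/time trade-off this creates is what the space-hardness proofs quantify.
```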

    Protein-Protein Interactions of Tandem Affinity Purified Protein Kinases from Rice

    Eighty-eight rice (Oryza sativa) cDNAs encoding rice leaf expressed protein kinases (PKs) were fused to a Tandem Affinity Purification tag (TAP-tag) and expressed in transgenic rice plants. The TAP-tagged PKs and interacting proteins were purified from the T1 progeny of the transgenic rice plants and identified by tandem mass spectrometry. Forty-five TAP-tagged PKs were recovered in this study and thirteen of these were found to interact with other rice proteins with a high probability score. In vivo phosphorylated sites were found for three of the PKs. A comparison of the TAP-tag data from a combined analysis of 129 TAP-tagged rice protein kinases with a concurrent screen using yeast two-hybrid methods identified an evolutionarily new rice protein that interacts with the well conserved cell division cycle 2 (CDC2) protein complex.

    Ubiquitous Weak-key Classes of BRW-polynomial Function

    The BRW-polynomial function has been suggested as a preferred alternative to the polynomial function, owing to its high efficiency and seemingly non-existent weak keys. In this paper we investigate the weak-key issue of the BRW-polynomial function as well as of BRW-instantiated cryptographic schemes. Although the relationship between coefficients and input blocks in BRW-polynomial evaluation is indistinct, we give a recursive algorithm that computes, for any given $(2^{v+1}-1)$-block message, another $(2^{v+1}-1)$-block message such that their output difference through BRW-polynomial evaluation equals any given $s$-degree polynomial, where $v\ge\lfloor\log_2(s+1)\rfloor$. With such an algorithm, we illustrate that any non-empty key subset is a weak-key class in the BRW-polynomial function. Moreover, any key subset of the BRW-polynomial function consisting of at least $2$ keys is a weak-key class in BRW-instantiated cryptographic schemes like the Wegman-Carter scheme, the UHF-then-PRF scheme, DCT, etc. Especially in the AE scheme DCT, both its confidentiality and its integrity collapse totally when weak keys of the BRW-polynomial function, which are ubiquitous, are used.
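
    For reference, the BRW-polynomial recursion discussed above can be sketched as below. The evaluation is done over a small prime field as a stand-in for the GF($2^{128}$) arithmetic used in practice; the recursion shape, not the field, is the point of the illustration.

```python
P = (1 << 61) - 1  # toy prime modulus; real instantiations work in GF(2^128)

def brw(key: int, blocks: list[int]) -> int:
    """Evaluate the BRW polynomial of the message blocks at the given key."""
    n = len(blocks)
    if n == 0:
        return 0
    if n == 1:
        return blocks[0] % P
    if n == 2:
        return (blocks[0] * key + blocks[1]) % P
    if n == 3:
        return ((key + blocks[0]) * (key * key + blocks[1]) + blocks[2]) % P
    # n >= 4: split at t, the largest power of two with t <= n
    t = 1 << (n.bit_length() - 1)
    left = brw(key, blocks[:t - 1])
    right = brw(key, blocks[t:])
    return (left * (pow(key, t, P) + blocks[t - 1]) + right) % P

# Messages of 2^{v+1}-1 blocks (1, 3, 7, 15, ...) give a complete recursion
# tree, which is the message shape the weak-key analysis above manipulates.
```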

    Cryptanalysis of OCB2: Attacks on Authenticity and Confidentiality

    We present practical attacks on OCB2. This mode of operation of a blockcipher was designed with the aim of providing particularly efficient and provably-secure authenticated encryption services, and since its proposal about 15 years ago it has belonged to the top performers in this realm. OCB2 was included in an ISO standard in 2009. An internal building block of OCB2 is the tweakable blockcipher obtained by operating a regular blockcipher in XEX$^\ast$ mode. The latter provides security only when evaluated in accordance with certain technical restrictions that, as we note, are not always respected by OCB2. This leads to devastating attacks against OCB2's security promises: We develop a range of very practical attacks that, amongst others, demonstrate universal forgeries and full plaintext recovery. We complete our report with proposals for (provably) repairing OCB2. To our understanding, as a direct consequence of our findings, OCB2 is currently in a process of removal from ISO standards. Our attacks do not apply to OCB1 and OCB3, and our privacy attacks on OCB2 require an active adversary.
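
    A minimal sketch of the XEX tweakable-blockcipher mode mentioned above is given below: encrypt(x) = $E_K(x \oplus \Delta) \oplus \Delta$ with $\Delta = 2^i \cdot E_K(N)$ in GF($2^{128}$). The blockcipher is passed in as a callable so the sketch stays self-contained; the function names and this packaging are assumptions for illustration, and per the abstract OCB2's problems come from how it deviates from the mode's technical restrictions, not from the doubling arithmetic itself.

```python
def gf128_double(block: bytes) -> bytes:
    """Multiply a 16-byte block by x in GF(2^128) (polynomial x^128 + x^7 + x^2 + x + 1)."""
    v = int.from_bytes(block, 'big') << 1
    if v >> 128:                       # reduce if the top bit overflowed
        v = (v ^ 0x87) & ((1 << 128) - 1)
    return v.to_bytes(16, 'big')

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def xex_encrypt(enc, nonce: bytes, i: int, x: bytes) -> bytes:
    """enc: a 16-byte blockcipher E_K. Returns E_K(x xor 2^i L) xor 2^i L, L = E_K(N)."""
    delta = enc(nonce)
    for _ in range(i):
        delta = gf128_double(delta)
    return xor(enc(xor(x, delta)), delta)
```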

    Balloon Hashing: A Memory-Hard Function Providing Provable Protection Against Sequential Attacks

    We present the Balloon password-hashing algorithm. This is the first practical cryptographic hash function that: (i) has proven memory-hardness properties in the random-oracle model, (ii) uses a password-independent access pattern, and (iii) meets or exceeds the performance of the best heuristically secure password-hashing algorithms. Memory-hard functions require a large amount of working space to evaluate efficiently, and when used for password hashing they dramatically increase the cost of offline dictionary attacks. In this work, we leverage a previously unstudied property of a certain class of graphs (“random sandwich graphs”) to analyze the memory-hardness of the Balloon algorithm. The techniques we develop are general: we also use them to give a proof of security of the scrypt and Argon2i password-hashing functions in the random-oracle model. Our security analysis uses a sequential model of computation, which essentially captures attacks that run on single-core machines. Recent work shows how to use massively parallel special-purpose machines (e.g., with hundreds of cores) to attack Balloon and other memory-hard functions. We discuss these important attacks, which are outside of our adversary model, and propose practical defenses against them. To motivate the need for security proofs in the area of password hashing, we demonstrate and implement a practical attack against Argon2i that successfully evaluates the function with less space than was previously claimed possible. Finally, we use experimental results to compare the performance of the Balloon hashing algorithm to other memory-hard functions.
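
    The following is a simplified sketch of the Balloon structure described above: fill a buffer of space_cost blocks sequentially, then repeatedly mix each block with its predecessor and a few blocks chosen by a password-independent schedule. The parameter defaults, index derivation, and output extraction are illustrative assumptions, not the published specification.

```python
import hashlib

def balloon_sketch(password: bytes, salt: bytes,
                   space_cost: int = 16, time_cost: int = 3, delta: int = 3) -> bytes:
    def H(*parts: bytes) -> bytes:
        h = hashlib.sha256()
        for p in parts:
            h.update(p)
        return h.digest()

    cnt = 0
    def H_cnt(*parts: bytes) -> bytes:
        nonlocal cnt
        cnt += 1
        return H(cnt.to_bytes(8, 'big'), *parts)

    # Expand: fill the buffer sequentially
    buf = [H_cnt(password, salt)]
    for m in range(1, space_cost):
        buf.append(H_cnt(buf[m - 1]))

    # Mix: indices depend only on salt and counters, so the access pattern is
    # independent of the password (the side-channel-resistance property above)
    for t in range(time_cost):
        for m in range(space_cost):
            buf[m] = H_cnt(buf[m - 1], buf[m])
            for i in range(delta):
                idx = int.from_bytes(
                    H(salt, t.to_bytes(4, 'big'), m.to_bytes(4, 'big'),
                      i.to_bytes(4, 'big')), 'big') % space_cost
                buf[m] = H_cnt(buf[m], buf[idx])

    return buf[-1]  # extract the last block as the output
```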

    Indifferentiable Authenticated Encryption

    We study Authenticated Encryption with Associated Data (AEAD) from the viewpoint of composition in arbitrary (single-stage) environments. We use the indifferentiability framework to formalize the intuition that a “good” AEAD scheme should have random ciphertexts subject to decryptability. Within this framework, we can then apply the indifferentiability composition theorem to show that such schemes offer extra safeguards wherever the relevant security properties are not known, or cannot be predicted in advance, as in general-purpose crypto libraries and standards. We show, on the negative side, that generic composition (in many of its configurations) and well-known classical and recent schemes fail to achieve indifferentiability. On the positive side, we give a provably indifferentiable Feistel-based construction, which reduces the round complexity from at least 6, needed for blockciphers, to only 3 for encryption. This result is not too far off the theoretical optimum, as we give a lower bound that rules out the indifferentiability of any construction with fewer than 2 rounds.
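
    As a minimal sketch of the 3-round Feistel skeleton mentioned above, the code below models the round functions as keyed hashes standing in for ideal round oracles. It shows only the round structure and its invertibility; it is not the paper's full indifferentiable AEAD construction, and the helper names and keying are assumptions for illustration.

```python
import hashlib

def _round(key: bytes, idx: int, data: bytes, out_len: int) -> bytes:
    # hash-based round function expanded to out_len bytes so halves of any size work
    out = b''
    ctr = 0
    while len(out) < out_len:
        out += hashlib.sha256(key + bytes([idx, ctr]) + data).digest()
        ctr += 1
    return out[:out_len]

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel3_encrypt(key: bytes, left: bytes, right: bytes) -> tuple[bytes, bytes]:
    for idx in range(3):
        left, right = right, _xor(left, _round(key, idx, right, len(left)))
    return left, right

def feistel3_decrypt(key: bytes, left: bytes, right: bytes) -> tuple[bytes, bytes]:
    for idx in reversed(range(3)):
        left, right = _xor(right, _round(key, idx, left, len(right))), left
    return left, right
```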